
    Detecting Biological Motion for Human-Robot Interaction: A Link between Perception and Action

    One of the fundamental skills supporting safe and comfortable interaction between humans is their ability to intuitively understand each other's actions and intentions. At the basis of this ability is the special-purpose visual processing that the human brain has developed to comprehend human motion. Among the first "building blocks" enabling the bootstrapping of such visual processing is the ability to detect movements performed by biological agents in the scene, a skill that human babies master in the first days of their life. In this paper, we present a computational model based on the assumption that such a visual ability must rely on local low-level visual motion features that are independent of shape cues such as body configuration and perspective. Moreover, we implement it on the humanoid robot iCub, embedding it in a software architecture that also leverages the regularities of biological motion to control the robot's attention and oculomotor behaviors. In essence, we put forth a model in which the regularities of biological motion link perception and action, enabling a robotic agent to follow a human-inspired sensory-motor behavior. We posit that this choice facilitates mutual understanding and goal prediction during collaboration, increasing the pleasantness and safety of the interaction.
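
    As a rough illustration of what "local low-level visual motion features" could look like in practice, the hypothetical sketch below (not the iCub architecture described in the paper) reduces dense optical flow between two frames to simple speed and direction statistics that carry no information about body configuration or viewpoint; the function name, histogram ranges, and Farneback parameters are illustrative assumptions.

        import cv2
        import numpy as np

        def lowlevel_motion_features(prev_gray, curr_gray):
            # Dense optical flow between two consecutive grayscale frames
            # (Farneback parameters are generic defaults, not tuned values).
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Shape-agnostic summaries: histograms of local speed and direction.
            speed_hist, _ = np.histogram(magnitude, bins=16, range=(0, 20))
            dir_hist, _ = np.histogram(angle, bins=16, range=(0, 2 * np.pi))
            return np.concatenate([speed_hist, dir_hist]).astype(float)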

    Modeling Visual Features to Recognize Biological Motion: A Developmental Approach

    In this work we address the problem of designing and developing computational vision models, comparable to the early stages of human development, that rely on coarse low-level information. More specifically, we consider a binary classification setting to characterize biological movements with respect to non-biological dynamic events. To this purpose, our model builds on top of optical flow estimation and abstracts the representation to simulate the limited amount of visual information available at birth. We take inspiration from known biological motion regularities described by the Two-Thirds Power Law, and design a motion representation that includes different low-level features, which can be interpreted as the computational counterpart of the elements involved in the law. Our reference application is human-machine interaction, thus the experimental analysis is conducted on a set of videos depicting two different subjects performing a repertoire of dynamic gestures typical of such a setting (e.g. lifting an object, pointing, ...). Two slightly different viewpoints are considered. The contribution of our work is twofold. First, we show that the effects of the Two-Thirds Power Law can be appreciated in a video analysis setting. Second, we show that, despite the coarse motion representation, our model reaches biological motion classification performance (around 89%) reminiscent of the abilities of very young babies. Moreover, our model shows tolerance to viewpoint changes.
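
    For context, the Two-Thirds Power Law relates the instantaneous tangential velocity v of a movement to the curvature k of its path, roughly v = K * k^(-1/3). The minimal numerical sketch below (our illustration, not the paper's code) checks this relation on a sampled planar trajectory by fitting the exponent in log-log space; the ellipse traced at constant angular rate is a standard example that satisfies the law exactly.

        import numpy as np

        def power_law_exponent(x, y, dt=1.0):
            # Velocity and acceleration of the sampled planar curve (x(t), y(t)).
            vx, vy = np.gradient(x, dt), np.gradient(y, dt)
            ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
            speed = np.hypot(vx, vy)
            # Curvature of a planar curve: |vx*ay - vy*ax| / speed**3.
            curvature = np.abs(vx * ay - vy * ax) / np.maximum(speed, 1e-9) ** 3
            mask = (speed > 1e-6) & (curvature > 1e-6)
            # Fit log(speed) = log(K) + beta * log(curvature); the Two-Thirds
            # Power Law predicts beta close to -1/3 for biological movements.
            beta, log_k = np.polyfit(np.log(curvature[mask]), np.log(speed[mask]), 1)
            return beta, np.exp(log_k)

        # Example: an ellipse traced at constant angular rate obeys the law exactly.
        t = np.linspace(0, 2 * np.pi, 2000)
        beta, _ = power_law_exponent(0.3 * np.cos(t), 0.1 * np.sin(t), dt=t[1] - t[0])
        print(f"fitted exponent: {beta:.3f}  (the law predicts -1/3)")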

    Human motion understanding for selecting action timing in collaborative human-robot interaction

    In the industry of the future, as well as in healthcare and at home, robots will be a familiar presence. Since they will be working closely with human operators who are not always properly trained for human-machine interaction tasks, robots will need the ability to automatically adapt to changes in the task to be performed, or to cope with variations in how the human partner completes the task. The goal of this work is to take a further step toward endowing robots with such a capability. To this purpose, we focus on the identification of relevant time instants in an observed action, called dynamic instants, which are informative about the partner's movement timing and mark the instants where an action starts, ends, or changes into another action. These time instants are temporal locations where the motion can ideally be segmented, providing a set of primitives that can be used to build a temporal signature of the action and, ultimately, to support the understanding of its dynamics and coordination in time. We validate our approach in two contexts, considering first a situation in which the human partner can perform multiple different activities, and then moving to settings where an action is already recognized and shows a certain degree of periodicity. In the two contexts we address different challenges. In the first one, working in batch on a dataset of videos of a variety of cooking activities, we investigate whether the action signature we compute can facilitate the understanding of which type of action is occurring in front of the observer, with tolerance to viewpoint changes. In the second context, we evaluate online, on the robot iCub, the capability of the action signature to provide hints for establishing an actual temporal coordination during the interaction with human participants. In both cases, we show promising results that speak in favour of the potential of our approach.
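
    As a concrete, simplified reading of the idea, dynamic instants can be approximated as pronounced local minima of the speed profile of a tracked point (e.g. the hand). The sketch below is only a hypothetical proxy for the segmentation the paper relies on; the function name and prominence threshold are assumptions, not values from the paper.

        import numpy as np
        from scipy.signal import find_peaks

        def dynamic_instants(positions, dt, prominence=0.05):
            # positions: array of shape (T, D) with a tracked point (e.g. the hand).
            velocity = np.gradient(positions, dt, axis=0)
            speed = np.linalg.norm(velocity, axis=1)
            # Pronounced local minima of the speed profile are taken as candidate
            # start/end/switch instants; the prominence threshold is arbitrary here.
            minima, _ = find_peaks(-speed, prominence=prominence)
            return minima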

    An adaptive robot teacher boosts a human partner’s learning performance in joint action

    One important challenge for roboticists in the coming years will be to design robots that teach humans new skills or lead humans in activities requiring sustained motivation (e.g. physiotherapy, skills training). In the current study, we tested the hypothesis that if a robot teacher invests physical effort in adapting to a human learner, in a context in which the robot is teaching the human a new skill, this would facilitate the human's learning. We also hypothesized that the robot teacher's effortful adaptation would lead the human learner to experience greater rapport in the interaction. To this end, we devised a scenario in which the iCub and a human participant alternated in teaching each other new skills. In the high-effort condition, the iCub slowed down its movements when repeating a demonstration for the human learner, whereas in the low-effort condition it sped the movements up when repeating the demonstration. The results indicate that participants indeed learned more effectively when the iCub adapted its demonstrations, and that the iCub's apparently effortful adaptation led participants to experience it as more helpful.

    A humanoid robot’s effortful adaptation boosts partners’ commitment to an interactive teaching task

    We tested the hypothesis that, if a robot apparently invests effort in teaching a new skill to a human participant, the human participant will reciprocate by investing more effort in teaching the robot a new skill, too. To this end, we devised a scenario in which the iCub and a human participant alternated in teaching each other new skills. In the Adaptive condition of the robot teaching phase, the iCub slowed down its movements when repeating a demonstration for the human learner, whereas in the Unadaptive condition it sped the movements up when repeating the demonstration. In a subsequent participant teaching phase, human participants were asked to give the iCub a demonstration, and then to repeat it if the iCub had not understood. We predicted that in the Adaptive condition, participants would reciprocate the iCub's adaptivity by investing more effort to slow down their movements and to increase segmentation when repeating their demonstration. The results showed that this was true when participants experienced the Adaptive condition after the Unadaptive condition, and not when the order was inverted, indicating that participants were particularly sensitive to the changes in the iCub's level of commitment over the course of the experiment.

    End-effector pose estimation of the Monash Epicyclic-Parallel Manipulator through the visual observation of its legs

    Past research has shown that it is possible to estimate the end-effector pose of parallel robots by vision. It was first proposed to directly observe the end-effector. However, this observation may not be possible (e.g. in the case of a haptic device, for which the end-effector may be hidden by the user's hand). Therefore, another type of end-effector pose estimation, based on the observation of the leg directions, has been proposed. Even though interesting results were obtained, this method is not suitable for some particular parallel robot families (e.g. the Monash Epicyclic-Parallel Manipulator, MEPaM). This paper proposes a new approach for the estimation of the end-effector pose: by observing the mechanism legs, it is possible to extract the Plücker coordinates of their lines and determine the end-effector pose. The new end-effector pose estimation method is applied to the MEPaM. All results are validated on a MEPaM simulator created using ADAMS/Controls and interfaced with Matlab/Simulink.
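
    For reference, the Plücker coordinates of a 3-D line are the pair (u, m), where u is the line's unit direction and m = p x u is its moment about the origin for any point p on the line. The short sketch below (an illustration of the representation, not the paper's estimation pipeline) builds them from two observed points on a leg.

        import numpy as np

        def plucker_line(p1, p2):
            # Unit direction of the line through points p1 and p2.
            u = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
            u /= np.linalg.norm(u)
            # Moment of the line about the origin; (u, m) identify the line
            # regardless of which two points on it were observed.
            m = np.cross(np.asarray(p1, dtype=float), u)
            return u, m

        # Two hypothetical observed points on a leg.
        u, m = plucker_line([0.1, 0.0, 0.2], [0.4, 0.1, 0.9])
        print(u, m, np.dot(u, m))  # the last value is ~0 by construction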

    Comparative Analysis of two Types of Leg-observation-based Visual Servoing Approaches for the Control of the Five-bar Mechanism

    Past research has proven that the end-effector pose of parallel mechanisms can be effectively estimated by vision. For parallel robots, it was previously proposed to directly observe the end-effector. However, this observation may not be possible (e.g. if the robot is milling). Therefore, it has been proposed to use another type of controller based on the observation of the leg directions. Despite interesting results, this controller involves mapping singularities inside the robot workspace, near which the accuracy is poor. This paper presents a new approach for vision-based control of the end-effector: by observing the mechanism legs, it is possible to extract the Plücker coordinates of their lines and control the end-effector pose. The paper also presents a comparison between the previous approach, based on the leg directions, and the new approach, based on the Plücker coordinates of the leg lines. The new approach can be applied to a family of parallel machines for which the previous approach is not suitable, and also has some advantages regarding its workspace of applicability. Simulation results of both controllers applied to a five-bar mechanism are presented.
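
    Both controllers can be read as different feature choices plugged into the classical kinematic visual servoing law, in which the commanded twist is v = -gain * pinv(L) * (s - s*), with s the observed features (leg directions or leg-line Plücker coordinates), s* their desired values, and L the corresponding interaction matrix. The snippet below sketches only that generic control step; the interaction matrices specific to each feature set are derived in the paper and are not reproduced here.

        import numpy as np

        def visual_servoing_step(s, s_star, L, gain=0.5):
            # s: current feature vector (e.g. stacked leg directions or leg-line
            # Plücker coordinates); s_star: desired features; L: interaction matrix.
            error = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
            # Classical kinematic law: commanded twist v = -gain * pinv(L) @ error.
            return -gain * np.linalg.pinv(L) @ error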